Universal linear least squares prediction: Upper and lower bounds

Authors

  • Andrew C. Singer
  • Suleyman Serdar Kozat
  • Meir Feder
Abstract

We consider the problem of sequential linear prediction of real-valued sequences under the square-error loss function. For this problem, a prediction algorithm has been demonstrated [1]–[3] whose accumulated squared prediction error, for every bounded sequence, is asymptotically as small as that of the best fixed linear predictor for that sequence, taken from the class of all linear predictors of a given order p. The redundancy, or excess prediction error above that of the best predictor for that sequence, is upper-bounded by A² p ln(n)/n, where n is the data length and the sequence is assumed to be bounded by some A. In this correspondence, we provide an alternative proof of this result by connecting it with universal probability assignment. We then show that this predictor is optimal in a min–max sense, by deriving a corresponding lower bound, such that no sequential predictor can ever do better than a redundancy of A² p ln(n)/n.
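As a rough illustration of the setting, the sketch below implements a simple regularized RLS-style sequential linear predictor and compares its accumulated squared error with that of the best fixed linear predictor of order p chosen in hindsight. The function names and the test sequence are illustrative assumptions; the exact universal algorithm analyzed in [1]–[3] and in this correspondence differs in its details.

    import numpy as np

    def sequential_linear_prediction(x, p, delta=1.0):
        """Accumulated squared error of a regularized RLS-style sequential
        predictor of order p.  Only a sketch of the kind of predictor the
        correspondence analyzes, not the exact algorithm of [1]-[3]."""
        n = len(x)
        R = delta * np.eye(p)          # regularized correlation matrix
        r = np.zeros(p)                # cross-correlation vector
        cum_err = 0.0
        for t in range(p, n):
            u = x[t - p:t][::-1]       # most recent p samples, newest first
            w = np.linalg.solve(R, r)  # coefficients from data seen so far
            cum_err += (x[t] - w @ u) ** 2
            R += np.outer(u, u)        # update statistics after predicting
            r += x[t] * u
        return cum_err

    def best_fixed_predictor_error(x, p):
        """Accumulated squared error of the best fixed order-p linear
        predictor chosen in hindsight (batch least squares)."""
        U = np.array([x[t - p:t][::-1] for t in range(p, len(x))])
        y = x[p:]
        w, *_ = np.linalg.lstsq(U, y, rcond=None)
        return float(np.sum((y - U @ w) ** 2))

    # Bounded test sequence (|x_t| <= A = 1); the per-sample redundancy is
    # expected to behave roughly like the A^2 p ln(n)/n bound.
    rng = np.random.default_rng(0)
    x = np.clip(0.01 * np.cumsum(rng.standard_normal(5000)), -1.0, 1.0)
    n, p = len(x), 4
    redundancy = (sequential_linear_prediction(x, p) - best_fixed_predictor_error(x, p)) / n
    print(redundancy, p * np.log(n) / n)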


Similar Articles

Bounded-Variable Least-Squares: an Algorithm and Applications

The Fortran subroutine BVLS (bounded variable least-squares) solves linear least-squares problems with upper and lower bounds on the variables, using an active set strategy. The unconstrained least-squares problems for each candidate set of free variables are solved using the QR decomposition. BVLS has a “warm-start” feature permitting some of the variables to be initialized at their upper or l...
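For readers who want to experiment with problems of this type, SciPy's lsq_linear exposes a BVLS-style active-set method; the example below is an illustrative use of that routine, not the Fortran subroutine described above.

    import numpy as np
    from scipy.optimize import lsq_linear

    # Minimize ||Ax - b||_2 subject to elementwise bounds 0 <= x <= 1.
    rng = np.random.default_rng(1)
    A = rng.standard_normal((20, 5))
    b = rng.standard_normal(20)
    res = lsq_linear(A, b, bounds=(0.0, 1.0), method="bvls")
    print(res.x)     # solution with every component inside the bounds
    print(res.cost)  # 0.5 * ||A x - b||^2 at the solution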


Statistical and Algorithmic Perspectives on Randomized Sketching for Ordinary Least-Squares

We consider statistical and algorithmic aspects of solving large-scale least-squares (LS) problems using randomized sketching algorithms. Prior results show that, from an algorithmic perspective, when using sketching matrices constructed from random projections and leverage-score sampling, if the number of samples r is much smaller than the original sample size n, then the worst-case (WC) error is...
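A minimal sketch-and-solve example is given below, assuming a plain Gaussian random projection as the sketching matrix; leverage-score sampling and structured random projections are common alternatives in this literature.

    import numpy as np

    def sketch_and_solve(X, y, r, rng):
        """Compress an n-row least-squares problem to r rows with a Gaussian
        random projection, then solve the small problem exactly."""
        n = X.shape[0]
        S = rng.standard_normal((r, n)) / np.sqrt(r)   # sketching matrix
        beta, *_ = np.linalg.lstsq(S @ X, S @ y, rcond=None)
        return beta

    rng = np.random.default_rng(2)
    n, d, r = 20_000, 10, 500
    X = rng.standard_normal((n, d))
    y = X @ rng.standard_normal(d) + rng.standard_normal(n)
    beta_full, *_ = np.linalg.lstsq(X, y, rcond=None)
    beta_sk = sketch_and_solve(X, y, r, rng)
    # With r well above d (but far below n), the sketched residual norm is
    # close to the optimal one at a fraction of the cost.
    print(np.linalg.norm(y - X @ beta_full), np.linalg.norm(y - X @ beta_sk))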


Residual and Backward Error Bounds in Minimum Residual Krylov Subspace Methods

Minimum residual norm iterative methods for solving linear systems Ax = b can be viewed as, and are often implemented as, sequences of least squares problems involving Krylov subspaces of increasing dimensions. The minimum residual method (MINRES) [C. Paige and M. Saunders, SIAM J. Numer. Anal., 12 (1975), pp. 617–629] and generalized minimum residual method (GMRES) [Y. Saad and M. Schultz, SIA...
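As a small illustration of the residual-minimization viewpoint, the snippet below runs SciPy's GMRES on a nonsymmetric tridiagonal system and reports the final residual norm; the system and sizes are arbitrary choices for demonstration.

    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import gmres

    # GMRES minimizes ||b - A x_k|| over a Krylov subspace of growing
    # dimension k, i.e. it solves a sequence of small least-squares problems.
    n = 200
    A = diags([-1.0, 2.5, -1.2], offsets=[-1, 0, 1], shape=(n, n)).tocsr()
    b = np.ones(n)
    x, info = gmres(A, b)
    print(info)                       # 0 means convergence to the default tolerance
    print(np.linalg.norm(b - A @ x))  # final residual norm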


Minimax rates of estimation for high-dimensional linear regression over $\ell_q$-balls

Consider the standard linear regression model Y = Xβ + w, where Y ∈ R^n is an observation vector, X ∈ R^{n×d} is a design matrix, β ∈ R^d is the unknown regression vector, and w ∼ N(0, σ²I) is additive Gaussian noise. This paper studies the minimax rates of convergence for estimation of β for ℓ_p-losses and in the ℓ_2-prediction loss, assuming that β belongs to an ℓ_q-ball B_q(R_q) for some q ∈ [0, 1]. We show ...
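The hedged simulation below generates data from this model in the sparse (q = 0) case and measures the ℓ_2 estimation error and ℓ_2-prediction loss of an ℓ_1-regularized estimator; the estimator and regularization level are illustrative choices, since the paper studies minimax rates rather than any particular procedure.

    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(3)
    n, d, s, sigma = 200, 500, 5, 1.0
    X = rng.standard_normal((n, d))
    beta = np.zeros(d)
    beta[:s] = 1.0                          # s-sparse regression vector
    Y = X @ beta + sigma * rng.standard_normal(n)
    lam = sigma * np.sqrt(np.log(d) / n)    # common lambda scaling for sparse problems
    fit = Lasso(alpha=lam, fit_intercept=False, max_iter=10_000).fit(X, Y)
    print(np.linalg.norm(fit.coef_ - beta))                     # l2 estimation error
    print(np.linalg.norm(X @ (fit.coef_ - beta)) / np.sqrt(n))  # l2-prediction loss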


H∞ Bounds for the Recursive-Least-Squares Algorithm

We obtain upper and lower bounds for the H∞ norm of the RLS (Recursive-Least-Squares) algorithm. The H∞ norm may be regarded as the worst-case energy gain from the disturbances to the prediction errors, and is therefore a measure of the robustness of an algorithm to perturbations and model uncertainty. Our results allow one to compare the robustness of RLS with that of the LMS (Least-Mean-Square...
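A minimal RLS sketch with an empirical energy-gain measurement appears below; the weight vector, disturbance, and problem sizes are made-up examples, and the ratio computed from a single disturbance realization only lower-bounds the worst-case (H∞) energy gain.

    import numpy as np

    def rls(H, y, p0=1.0):
        """RLS with unit forgetting factor: predict y[t] from regressor H[t],
        then update the weight estimate.  Returns the a priori errors."""
        n, p = H.shape
        P = p0 * np.eye(p)                 # initial inverse-covariance-like matrix
        w = np.zeros(p)
        errs = np.empty(n)
        for t in range(n):
            h = H[t]
            errs[t] = y[t] - h @ w         # a priori prediction error
            Ph = P @ h
            k = Ph / (1.0 + h @ Ph)        # gain vector
            w = w + k * errs[t]
            P = P - np.outer(k, Ph)        # rank-one downdate
        return errs

    rng = np.random.default_rng(4)
    n, p = 1000, 3
    H = rng.standard_normal((n, p))
    v = 0.1 * rng.standard_normal(n)       # disturbance (measurement noise)
    y = H @ np.array([0.5, -0.2, 0.1]) + v
    e = rls(H, y)
    print(np.sum(e**2) / np.sum(v**2))     # empirical energy gain for this realization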



Journal:
  • IEEE Trans. Information Theory

Volume 48, Issue -

Pages -

Publication date: 2002